
    End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners

    For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planner representations are used: 1) the planned route on OpenStreetMap, represented as a stack of GPS coordinates, and 2) the planned route rendered on TomTom Go Mobile, with the progression recorded as a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction.
    Comment: to be published at ECCV 2018
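    As a rough illustration of the kind of sensor fusion described above, the sketch below combines per-camera features from a surround-view rig with a route-planner encoding to regress steering angle and speed. It is a minimal PyTorch sketch, not the architecture published in the paper; the module layout, feature sizes, and the flattened GPS-waypoint route encoding are all assumptions.

```python
# Minimal sketch (not the authors' architecture): fuse surround-view camera
# features with a route-planner encoding to predict steering angle and speed.
import torch
import torch.nn as nn

class SurroundViewDriver(nn.Module):
    def __init__(self, n_cameras=8, route_dim=64):
        super().__init__()
        # Shared per-camera image encoder (assumed; any backbone would do).
        self.cam_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Route-planner branch: here a stack of upcoming GPS waypoints,
        # flattened to a vector (one of the two encodings the paper mentions).
        self.route_encoder = nn.Sequential(nn.Linear(route_dim, 64), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(n_cameras * 32 + 64, 128), nn.ReLU(),
            nn.Linear(128, 2),  # [steering angle, speed]
        )

    def forward(self, images, route):
        # images: (batch, n_cameras, 3, H, W); route: (batch, route_dim)
        b, n, c, h, w = images.shape
        cam_feats = self.cam_encoder(images.view(b * n, c, h, w)).view(b, -1)
        route_feats = self.route_encoder(route)
        return self.head(torch.cat([cam_feats, route_feats], dim=1))

model = SurroundViewDriver()
pred = model(torch.randn(2, 8, 3, 96, 160), torch.randn(2, 64))
print(pred.shape)  # torch.Size([2, 2])
```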

    Combining Path Integration and Remembered Landmarks When Navigating without Vision

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision rely only on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.
    National Institutes of Health (U.S.) (Grant T32 HD007151); National Institutes of Health (U.S.) (Grant T32 EY07133); National Institutes of Health (U.S.) (Grant F32EY019622); National Institutes of Health (U.S.) (Grant EY02857); National Institutes of Health (U.S.) (Grant EY017835-01); National Institutes of Health (U.S.) (Grant EY015616-03); United States. Department of Education (H133A011903)
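    The gated combination reported above can be pictured as reliability-weighted averaging that is only applied when the two cues agree. The sketch below is a schematic illustration under an assumed Gaussian noise model; the threshold rule, parameter values, and function names are illustrative and not taken from the study.

```python
# Schematic sketch of gated cue combination (assumed Gaussian cue noise;
# the gating threshold and numbers are illustrative, not from the study).
def combine_location(landmark_est, landmark_sd, path_est, path_sd, gate_sd=2.0):
    """Reliability-weighted average of a remembered-landmark estimate and a
    path-integration estimate, gated by how congruent the two cues are."""
    conflict = abs(landmark_est - path_est)
    # Gate: if the cues disagree by more than a few standard deviations,
    # fall back on path integration alone (one reading of "gated" integration).
    if conflict > gate_sd * max(landmark_sd, path_sd):
        return path_est, path_sd**2
    w_l = 1.0 / landmark_sd**2
    w_p = 1.0 / path_sd**2
    combined = (w_l * landmark_est + w_p * path_est) / (w_l + w_p)
    combined_var = 1.0 / (w_l + w_p)  # always <= either single-cue variance
    return combined, combined_var

# Congruent cues: the fused estimate is more precise than path integration alone.
print(combine_location(landmark_est=5.0, landmark_sd=0.5, path_est=5.4, path_sd=0.8))
```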

    Pointing errors in non-metric virtual environments

    There have been suggestions that human navigation may depend on representations that have no metric, Euclidean interpretation, but that hypothesis remains contentious. An alternative is that observers build a consistent 3D representation of space. Using immersive virtual reality, we measured the ability of observers to point to targets in mazes that had zero, one or three ‘wormholes’: regions where the maze configuration changed invisibly. In one model, we allowed the configuration of the maze to vary to best explain the pointing data; in a second model, we also allowed the local reference frame to be rotated through 90, 180 or 270 degrees. The latter model outperformed the former in the wormhole conditions, which is inconsistent with a Euclidean cognitive map.
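    The comparison between the two models can be pictured as a likelihood comparison in which one model is additionally allowed to rotate the local reference frame by a multiple of 90 degrees. The toy sketch below uses made-up pointing responses and an assumed von Mises error model; it is not the authors' fitting procedure.

```python
# Toy sketch of the model comparison: does allowing the local reference frame
# to rotate by 90/180/270 degrees explain pointing directions better?
# Error model and data below are illustrative, not the authors' analysis.
import numpy as np

def neg_log_lik(observed_deg, predicted_deg, kappa=4.0):
    """Von Mises negative log-likelihood of observed pointing directions."""
    diff = np.deg2rad(observed_deg - predicted_deg)
    return -np.sum(kappa * np.cos(diff) - np.log(2 * np.pi * np.i0(kappa)))

observed = np.array([85.0, 92.0, 178.0, 183.0])          # pointing responses (deg)
predicted_euclidean = np.array([0.0, 5.0, 90.0, 95.0])   # consistent-map prediction

# Model 1: Euclidean/consistent map, no extra rotation allowed.
nll_plain = neg_log_lik(observed, predicted_euclidean)

# Model 2: additionally allow a 0/90/180/270 degree rotation of the local frame.
rotations = [0.0, 90.0, 180.0, 270.0]
nll_rotated = min(neg_log_lik(observed, predicted_euclidean + r) for r in rotations)

print(f"plain: {nll_plain:.1f}, with rotation: {nll_rotated:.1f}")
# A substantially lower value for the rotated model (after penalizing its extra
# parameter) would favour a non-Euclidean account, as in the wormhole mazes.
```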

    Human place and response learning: navigation strategy selection, pupil size and gaze behavior.

    In this study, we examined the cognitive processes and ocular behavior associated with on-going navigation strategy choice using a route learning paradigm that distinguishes between three different wayfinding strategies: an allocentric place strategy, and the egocentric associative cue and beacon response strategies. Participants approached intersections of a known route from a variety of directions, and were asked to indicate the direction in which the original route continued. Their responses in a subset of these test trials allowed the assessment of strategy choice over the course of six experimental blocks. The behavioral data revealed an initial maladaptive bias for a beacon response strategy, with shifts in favor of the optimal configuration place strategy occurring over the course of the experiment. Response time analysis suggests that the configuration strategy relied on spatial transformations applied to a viewpoint-dependent spatial representation, rather than direct access to an allocentric representation. Furthermore, pupillary measures reflected the employment of place and response strategies throughout the experiment, with increasing use of the more cognitively demanding configuration strategy associated with increases in pupil dilation. During test trials in which known intersections were approached from different directions, visual attention was directed to the landmark encoded during learning as well as the intended movement direction. Interestingly, the encoded landmark did not differ between the three navigation strategies, which is discussed in the context of initial strategy choice and the parallel acquisition of place and response knowledge.

    Modelling human visual navigation using multi-view scene reconstruction

    It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
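    As a rough picture of the modelling pipeline described above, the toy sketch below treats each reconstructed landmark as voting for the navigation goal with Gaussian positional uncertainty and accumulates those votes into a likelihood map over candidate positions. The scene, offsets, and noise levels are illustrative assumptions, not the published model.

```python
# Toy sketch: turn noisy reconstructed landmark positions into a likelihood
# map over where the observer should navigate. Scene, offsets and noise
# levels are illustrative assumptions, not the published model.
import numpy as np

# Reconstructed landmark positions (x, y), the remembered goal offset from
# each landmark, and a per-landmark positional standard deviation.
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
goal_offsets = np.array([[2.0, 1.0], [-2.0, 1.0], [0.0, -2.0]])
sigmas = np.array([0.4, 0.6, 0.5])

# Each landmark independently "votes" for a goal position with Gaussian noise.
xs, ys = np.meshgrid(np.linspace(-2, 6, 200), np.linspace(-2, 5, 200))
log_map = np.zeros_like(xs)
for lm, off, sd in zip(landmarks, goal_offsets, sigmas):
    pred = lm + off
    d2 = (xs - pred[0]) ** 2 + (ys - pred[1]) ** 2
    log_map += -d2 / (2 * sd**2)   # log of an isotropic Gaussian, up to a constant

likelihood_map = np.exp(log_map - log_map.max())   # rescale for display
peak = np.unravel_index(np.argmax(likelihood_map), likelihood_map.shape)
print("most likely goal position:", xs[peak], ys[peak])
```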

    An analysis of waves underlying grid cell firing in the medial entorhinal cortex

    Layer II stellate cells in the medial entorhinal cortex (MEC) express hyperpolarisation-activated cyclic-nucleotide-gated (HCN) channels that allow for rebound spiking via an I_h current in response to hyperpolarising synaptic input. A computational modelling study by Hasselmo [2013 Neuronal rebound spiking, resonance frequency and theta cycle skipping may contribute to grid cell firing in medial entorhinal cortex. Phil. Trans. R. Soc. B 369: 20120523] showed that an inhibitory network of such cells can support periodic travelling waves with a period that is controlled by the dynamics of the I_h current. Hasselmo has suggested that these waves can underlie the generation of grid cells, and that the known difference in I_h resonance frequency along the dorsal to ventral axis can explain the observed size and spacing between grid cell firing fields. Here we develop a biophysical spiking model within a framework that allows for analytical tractability. We combine the simplicity of integrate-and-fire neurons with a piecewise linear caricature of the gating dynamics for HCN channels to develop a spiking neural field model of MEC. Using techniques primarily drawn from the field of nonsmooth dynamical systems, we show how to construct periodic travelling waves, and in particular the dispersion curve that determines how wave speed varies as a function of period. This exhibits a wide range of long wavelength solutions, reinforcing the idea that rebound spiking is a candidate mechanism for generating grid cell firing patterns. Importantly, we develop a wave stability analysis to show how the maximum allowed period is controlled by the dynamical properties of the I_h current. Our theoretical work is validated by numerical simulations of the spiking model in both one and two dimensions.
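    To make the rebound-spiking mechanism concrete, the sketch below simulates a single integrate-and-fire unit with a piecewise-linear caricature of HCN-channel gating: a hyperpolarising current step activates the h-current, and releasing the step produces a rebound spike. All parameter values and the exact gating function are illustrative choices, not those analysed in the paper.

```python
# Single-cell sketch of rebound spiking via an I_h current: an integrate-and-fire
# unit with a piecewise-linear caricature of HCN gating. Parameter values are
# illustrative, not those used in the paper.
import numpy as np

dt, T = 0.1, 600.0                            # time step and duration (ms)
t = np.arange(0.0, T, dt)
V_rest, V_th, V_reset = -65.0, -50.0, -65.0   # resting, threshold, reset (mV)
tau_m, tau_h = 20.0, 100.0                    # membrane and gating time constants (ms)
g_h, E_h = 1.2, -20.0                         # relative h-conductance and reversal (mV)

def m_inf(V, V_open=-80.0, V_closed=-60.0):
    # Piecewise-linear HCN activation: fully open well below rest, closed
    # above rest, linear in between.
    return np.clip((V_closed - V) / (V_closed - V_open), 0.0, 1.0)

# Strong hyperpolarising step (already scaled by membrane resistance, in mV)
# between 100 ms and 300 ms.
I_ext = np.where((t > 100.0) & (t < 300.0), -40.0, 0.0)

V, m = V_rest, 0.0
spikes = []
for i, ti in enumerate(t):
    dV = (-(V - V_rest) + g_h * m * (E_h - V) + I_ext[i]) / tau_m
    dm = (m_inf(V) - m) / tau_h
    V += dt * dV
    m += dt * dm
    if V >= V_th:          # rebound spike shortly after the step is released
        spikes.append(ti)
        V = V_reset

print("spike times (ms):", [round(s, 1) for s in spikes])
```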

    Spontaneous Reorientation Is Guided by Perceived Surface Distance, Not by Image Matching Or Comparison

    Humans and animals recover their sense of position and orientation using properties of the surface layout, but the processes underlying this ability are disputed. Although behavioral and neurophysiological experiments on animals have long suggested that reorientation depends on representations of surface distance, recent experiments on young children join experimental studies and computational models of animal navigation in suggesting that reorientation depends either on the processing of any continuous perceptual variable or on matching of 2D, depthless images of the landscape. We tested the surface distance hypothesis against these alternatives through studies of children, using environments whose 3D shape and 2D image properties were arranged to enhance or cancel impressions of depth. In the absence of training, children reoriented by subtle differences in perceived surface distance under conditions that challenge current models of 2D-image matching or comparison processes. We provide evidence that children’s spontaneous navigation depends on representations of 3D layout geometry.
    National Institutes of Health (U.S.) (Grant HD 23103)

    Accessing ns–μs side chain dynamics in ubiquitin with methyl RDCs

    This study presents the first application of the model-free analysis (MFA) (Meiler in J Am Chem Soc 123:6098–6107, 2001; Lakomek in J Biomol NMR 34:101–115, 2006) to methyl group RDCs measured in 13 different alignment media in order to describe their supra-τc dynamics in ubiquitin. Our results indicate that methyl groups vary from rigid to very mobile, with good correlation to residue type, distance to the backbone and solvent exposure, and that considerable additional dynamics are effective at rates slower than the correlation time τc. In fact, the average amplitude of motion, expressed in terms of order parameters S² associated with the supra-τc window, provides evidence for fluctuations that contribute as much additional mobility as that already present on the faster ps–ns time scale measured from relaxation data. Comparison to previous results on ubiquitin demonstrates that the RDC-derived order parameters are dominated both by rotameric interconversions and by faster libration-type motions around equilibrium positions. They match best with those derived from a combined J-coupling and residual dipolar coupling approach (Chou in J Am Chem Soc 125:8959–8966, 2003) taking backbone motion into account. In order to appreciate the dynamic scale of side chains over the entire protein, the methyl group order parameters are compared to existing dynamic ensembles of ubiquitin. Of those recently published, the broadest one, namely the EROS ensemble (Lange in Science 320:1471–1475, 2008), best fits the collection of methyl group order parameters presented here. Lastly, we used the MFA-derived averaged spherical harmonics to perform highly parameterized rotameric searches of the side-chain conformations and find expanded rotamer distributions with excellent fit to our data. These rotamer distributions suggest the presence of concerted motions along the side chains.
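    For readers less familiar with order parameters, the sketch below computes a generalized order parameter S² from a synthetic ensemble of bond-vector orientations using the standard Cartesian contraction S² = (3/2) Σ_ij ⟨e_i e_j⟩² − 1/2, which is equivalent to averaging the rank-2 spherical harmonics as in the MFA. It illustrates the quantity being reported, not the paper's fitting procedure; the ensembles are synthetic.

```python
# Illustrative sketch: generalized order parameter S^2 of a bond vector from an
# orientational ensemble, S^2 = (3/2) * sum_ij <e_i e_j>^2 - 1/2. The synthetic
# ensembles below stand in for e.g. methyl symmetry-axis orientations; they are
# not data from the study.
import numpy as np

def order_parameter(vectors):
    """S^2 from an (N, 3) array of bond vectors (1 = rigid, 0 = isotropic)."""
    e = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    # Average the outer products <e_i e_j> over the ensemble, then contract.
    avg_outer = np.einsum('ni,nj->ij', e, e) / len(e)
    return 1.5 * np.sum(avg_outer**2) - 0.5

rng = np.random.default_rng(0)

# Nearly rigid axis: a small Gaussian wobble about z gives S^2 close to 1.
rigid = np.column_stack([0.05 * rng.standard_normal(5000),
                         0.05 * rng.standard_normal(5000),
                         np.ones(5000)])
# Fully disordered axis: isotropic directions give S^2 close to 0.
disordered = rng.standard_normal((5000, 3))

print(f"rigid:      S^2 = {order_parameter(rigid):.2f}")       # close to 1
print(f"disordered: S^2 = {order_parameter(disordered):.2f}")  # close to 0
```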